TECHnalysis Research Blog

May 19, 2025
Microsoft Brings AI Agents to Life

By Bob O'Donnell

As exciting as the world of AI-powered large language models (LLMs) may be, it’s clear that the tech industry is now moving beyond the core capabilities that these generative AI tools enable and into AI-powered agents. At the company’s developer-focused Build event, Microsoft made this second stage of AI evolution clear with a huge range of announcements that ultimately point to how software agents can take LLM capabilities into even more sophisticated and farther-reaching applications.

The buzzphrase Microsoft used at Build is the “agentic web,” but the agent-based opportunities the company described aren’t limited to the web or cloud-based applications; they extend to Windows and other client-based applications as well. In fact, several of the more interesting and potentially impactful announcements were related to AI agents and applications running on PCs.

In the process of building out its agent vision, Microsoft announced a range of tools that allow developers to more easily create agents and debuted several new prebuilt agents. They also discussed capabilities for organizing and orchestrating the actions of multiple agents. To top it off, they even described mechanisms for treating agents as “digital employees” complete with identities and access rights that can be managed in the company’s Entra digital identity and authentication framework.

On the development side, Microsoft debuted the GitHub Copilot coding agent, which is designed to make the process of creating AI-powered applications and other agents much easier. Microsoft described the Copilot coding agent as “an agentic partner,” likening it to a co-worker that can help out with certain portions of a software development project, such as refactoring old code or fixing bugs.

For non-programmer users interested in building agents, Microsoft also debuted a set of low-code/no-code tools for agent creation, including Copilot Studio. In addition, the company talked about the ability to create what are called Computer Use Agents (CUAs), which allow an agent to perform actions across a computer screen the way a human being would. CUAs can interact with websites and other applications in a way that isn’t possible by just using application APIs.
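
Conceptually, a CUA runs an observe-decide-act loop over the screen rather than calling an application’s APIs. The Python sketch below is a purely hypothetical illustration of that loop; every function here is a stub invented for the example, not part of Microsoft’s actual CUA tooling.

```python
# Hypothetical sketch of a Computer Use Agent loop: observe the screen,
# ask a model what to do next, then perform that action as a human would.
# None of these names come from Microsoft's actual CUA interfaces.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # e.g. "click", "type", "done"
    target: str = ""   # description of the UI element to act on
    text: str = ""     # text to type, if any

def capture_screen() -> bytes:
    """Stub: grab a screenshot of the current desktop."""
    return b"<screenshot bytes>"

def propose_action(screenshot: bytes, goal: str) -> Action:
    """Stub: send the screenshot and goal to a vision-capable model
    and get back the next UI action to perform."""
    return Action(kind="done")

def perform(action: Action) -> None:
    """Stub: translate the model's chosen action into mouse/keyboard input."""
    print(f"Performing {action.kind} on '{action.target}'")

def run_cua(goal: str, max_steps: int = 20) -> None:
    for _ in range(max_steps):
        action = propose_action(capture_screen(), goal)
        if action.kind == "done":
            break
        perform(action)

run_cua("Fill out the expense report form in the open browser tab")
```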

With the launch of Copilot Tuning, Microsoft is making it simple for end users to use their own content to fine-tune an existing LLM and then build their own agent from it, capable of performing the tasks they typically do. So, for example, writing text in a particular individual’s style, or integrating an organization’s specialized knowledge into a given LLM’s text-creation tools, becomes something that a much wider range of people can start to take advantage of.

Conceptually, this is very similar to the idea of a personal RAG (Retrieval Augmented Generation) tool, which was discussed a great deal over the last year or so but never really arrived as a mainstream product. The agent-based implementation that Microsoft is enabling with Copilot Tuning makes the process more straightforward by letting individuals simply select a set of documents they want added to the model’s training data, which should give the approach a bigger impact.
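
For readers who want to see the distinction, a RAG-style tool leaves the model untouched and pulls relevant documents in at question time, whereas tuning folds your content into the model itself. The toy Python sketch below illustrates only the retrieval side of that idea; the scoring is deliberately naive and ask_llm stands in for whatever model call you’d actually make, so none of this reflects Copilot Tuning’s real mechanics.

```python
# Toy illustration of the RAG idea: pick the documents most relevant to a
# question and hand them to the model as context at query time, rather than
# fine-tuning the model on them. The scoring is deliberately simplistic and
# ask_llm is a stand-in for any real chat-completion call.
def score(question: str, doc: str) -> int:
    q_words = set(question.lower().split())
    return len(q_words & set(doc.lower().split()))

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    return sorted(docs, key=lambda d: score(question, d), reverse=True)[:k]

def ask_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (cloud API or local model)."""
    return f"<answer based on a prompt of {len(prompt)} characters>"

docs = [
    "Q3 travel policy: economy class for flights under six hours.",
    "Expense reports are due within 30 days of travel.",
    "Brand voice guide: short sentences, no jargon.",
]
question = "When do I need to submit my expense report?"
context = "\n".join(retrieve(question, docs))
print(ask_llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```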

One of the key themes that Microsoft emphasized at Build was how the combination of multiple agents could enable even more powerful capabilities. To achieve that, the company discussed orchestration mechanisms for linking and coordinating the actions that different agents can perform. In Copilot Studio, for example, developers can link the actions of several agents together, allowing them to handle more complex tasks.
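
At its simplest, orchestration means a coordinating layer decides which agent handles which step and passes the results along. The sketch below is a minimal, hypothetical illustration of that hand-off pattern in Python; it is not how Copilot Studio actually wires agents together, and the agent functions are invented for the example.

```python
# Minimal, hypothetical multi-agent orchestration: each "agent" is just a
# callable that takes the running context and returns an updated one.
# Real orchestrators add routing, retries, and human-in-the-loop steps
# on top of this basic pattern.
from typing import Callable

Context = dict
Agent = Callable[[Context], Context]

def research_agent(ctx: Context) -> Context:
    ctx["findings"] = f"summary of sources about {ctx['topic']}"
    return ctx

def drafting_agent(ctx: Context) -> Context:
    ctx["draft"] = f"draft report based on: {ctx['findings']}"
    return ctx

def review_agent(ctx: Context) -> Context:
    ctx["final"] = ctx["draft"] + " (reviewed)"
    return ctx

def orchestrate(agents: list[Agent], ctx: Context) -> Context:
    for agent in agents:   # simple linear hand-off between agents
        ctx = agent(ctx)
    return ctx

result = orchestrate([research_agent, drafting_agent, review_agent],
                     {"topic": "on-device AI adoption"})
print(result["final"])
```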

The announcement with the most mind-blowing implications centers on the seemingly innocuous idea of registering agents within Entra. What’s amazing about this is that it essentially elevates a piece of autonomously running software into the equivalent of a digital employee. While the practical realities of how this gets deployed, and how far the capabilities and tasks performed by these “digital employees” will go, are still a bit unclear, the very fact that these types of ideas are under consideration shows how groundbreaking—and disruptive—the world of agents can truly be. Interestingly, in his Computex keynote the night before, Nvidia CEO Jensen Huang also discussed the concept of agents as digital employees. Obviously, there’s still much more to consider around this topic, but it’s unquestionably an eye-opener with enormous implications for future workplaces.

In addition to offering a range of new agent-related tools, Microsoft also made several significant announcements at Build related to emerging standards. In particular, the company threw its weight behind both the Model Context Protocol (MCP) and Agent2Agent (A2A) standards in a big way. MCP provides a standard way to connect LLMs to outside tools and data across different models and even different computing environments, while A2A offers a standard way for different agents to communicate and interact with each other. Microsoft is bringing MCP support into a huge range of products including GitHub, Copilot Studio, Dynamics 365, Azure AI Foundry, Semantic Kernel and, most interestingly, Windows 11. The combination of all these tools, along with the fact that Microsoft is joining the MCP Steering Committee, is likely to bring even more momentum and enthusiasm to this quickly growing standard. Also, by enabling A2A support, Microsoft is strongly encouraging the creation of an open ecosystem for agents, which will be critical to ensuring their widespread usage and success.
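
For context, MCP is built on JSON-RPC 2.0: a client asks a server which tools it exposes and then calls them by name. The snippet below is a simplified sketch of those message shapes only; the tool name and arguments are hypothetical, and a real client would also handle the initialize handshake and the server’s responses.

```python
# Simplified illustration of the JSON-RPC 2.0 message shapes MCP uses:
# a client discovers which tools a server exposes, then calls one of them.
# This only builds the payloads; a real client would send them over stdio
# or HTTP and process the responses.
import json

def rpc(method: str, params: dict, msg_id: int) -> str:
    return json.dumps({"jsonrpc": "2.0", "id": msg_id,
                       "method": method, "params": params}, indent=2)

# 1. Ask the server what it can do.
print(rpc("tools/list", {}, msg_id=1))

# 2. Invoke one of the advertised tools (the tool name and arguments here
#    are hypothetical; they depend entirely on the server).
print(rpc("tools/call",
          {"name": "search_files", "arguments": {"query": "Build 2025 notes"}},
          msg_id=2))
```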

Speaking of open, Microsoft also announced support for a wider range of models across nearly all of its development tools. While the company has yet to announce its own LLMs—it does have the Phi family of Small Language Models (SLMs)—the fact that developers can now choose from hundreds of models in Azure AI Foundry seems to imply that the company is moving away from its initial dependence on OpenAI’s models and broadening its support for more choices. It wouldn’t be the least bit surprising if, in the not-too-distant future, those choices include a family of its own LLMs.

For Windows developers, Microsoft debuted several capabilities designed to make it easier to create and run AI-powered agents and applications on PCs, as well as leverage the diverse range of increasingly powerful silicon options now available in Copilot+ PCs. With Windows AI Foundry—the successor to the Windows ML Runtime—Microsoft has built a tool that addresses the challenge Windows app developers have been facing because of all the different NPU and GPU architectures across chips from Qualcomm, Intel, AMD and Nvidia. By essentially providing a translation layer that adapts an application’s code to run efficiently on whatever silicon accelerators are present in a given system, Windows AI Foundry should encourage the development of more accelerated AI Windows apps.
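
This hardware-abstraction pattern (one model binary, whichever accelerator the machine happens to have) is already familiar from ONNX Runtime, the engine that underpins Microsoft’s existing Windows ML work. The sketch below uses ONNX Runtime’s public Python API to illustrate the general idea of picking an execution provider at runtime; it is not Windows AI Foundry’s new interface, and the model path is a placeholder.

```python
# Illustrative use of ONNX Runtime's execution-provider mechanism: the same
# model file runs on whichever accelerator the machine actually has.
# "model.onnx" is a placeholder; any exported ONNX model would work here.
import os
import onnxruntime as ort

available = ort.get_available_providers()
print("Available providers:", available)

# Prefer an NPU/GPU provider when present, otherwise fall back to CPU.
preferred = ["QNNExecutionProvider",   # Qualcomm NPUs
             "DmlExecutionProvider",   # DirectML (Windows GPUs)
             "CUDAExecutionProvider",  # Nvidia GPUs
             "CPUExecutionProvider"]   # always-available fallback
providers = [p for p in preferred if p in available] or ["CPUExecutionProvider"]

if os.path.exists("model.onnx"):
    session = ort.InferenceSession("model.onnx", providers=providers)
    print("Running on:", session.get_providers()[0])
else:
    print("Would load the model with providers:", providers)
```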

The company also included something they call Foundry Local, which not only broadens the range of models that developers can use within their applications but also provides connections to other platforms, including Nvidia NIMs. Thanks to Nvidia’s newly announced TensorRT for RTX, software developers will be able to take CUDA applications and run them on PCs with Nvidia RTX GPUs, opening up yet another mechanism for bringing AI-accelerated applications to PCs.

Finally, with the previously mentioned support for MCP in Windows 11, AI agents can serve as a go-between across different types of applications that register themselves as MCP servers, which means that multi-step, multi-application workflows can start to be automated via digital agents. While most of that work will likely happen within a given PC at first, the MCP protocol also makes it possible to split these tasks across multiple locations and environments, opening up the possibility of sophisticated, distributed hybrid AI applications.

As with most Microsoft Build events, the company managed to pack in a staggering array of announcements that, at first glance, are difficult to make sense of. But what’s starting to become clear is that agents, and the tools and protocols that are allowing them to develop, are leading us into an exciting new era of AI development. These new agents take the whiz-bang nature of the first-generation GenAI chatbots and direct it toward more organized and more powerful AI-powered applications. They’re even driving the creation of digital “co-workers” that are bound to have a huge impact on how organizations work and how employees get things done.

Here's a link to the original column: https://www.linkedin.com/pulse/microsoft-brings-ai-agents-life-bob-o-donnell-zwxkc

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and the professional financial community. You can follow him on LinkedIn at Bob O’Donnell or on Twitter @bobodtech.